The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
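For readers unfamiliar with the most common workaround reported above, the sketch below illustrates patch-based training for oversized samples, assuming NumPy arrays and illustrative patch dimensions; it is not taken from any specific challenge submission.

```python
import numpy as np

def sample_patch(volume, label, patch_size=(96, 96, 96), rng=np.random):
    """Randomly crop a training patch from a 3D sample too large to fit in memory."""
    starts = [rng.randint(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
    sl = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[sl], label[sl]

vol = np.zeros((512, 512, 300), dtype=np.float32)  # e.g., a CT volume (illustrative size)
seg = np.zeros_like(vol, dtype=np.int64)
patch, target = sample_patch(vol, seg)
# The k-fold cross-validation and ensembling reported above would train one model
# per fold on such patches and average the fold models' predictions at test time.
```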
The accurate detection and grasping of transparent objects are challenging but significant for robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and varying light conditions is proposed, comprising grasping position detection, tactile calibration, and visual-tactile fusion based classification. First, a multi-scene synthetic grasping dataset generation method with Gaussian-distribution-based data annotation is proposed. In addition, a novel grasping network named TGCNN is proposed for grasping position detection, showing good results in both synthetic and real scenes. For tactile calibration, inspired by human grasping, a fully convolutional network based tactile feature extraction method and a central-location-based adaptive grasping strategy are designed, improving the success rate by 36.7% compared to direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, which improves the classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch and greatly improves the grasping efficiency for transparent objects.
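As a rough illustration of the Gaussian-distribution-based annotation described above, the following sketch encodes annotated grasp centers as a Gaussian quality heatmap; the function name, image size, and sigma are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def gaussian_grasp_heatmap(h, w, grasp_points, sigma=8.0):
    """Encode annotated grasp centers as a Gaussian quality map (peak of 1 at each center)."""
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for (cy, cx) in grasp_points:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # keep the strongest response per pixel
    return heatmap

target = gaussian_grasp_heatmap(224, 224, [(100, 120)])  # training target for the grasp detector
```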
Despite the high global prevalence of hepatic steatosis, no automated diagnostic method has demonstrated generalizability in detecting steatosis across multiple international datasets. Traditionally, hepatic steatosis detection relies on clinicians selecting a region of interest (ROI) on computed tomography (CT) to measure liver attenuation. ROI selection demands time and expertise and is therefore not routinely performed at the population level. To automate the process, we validated an existing artificial intelligence (AI) system for 3D liver segmentation and used it to propose a novel method, AI-ROI, which automatically selects the ROI for attenuation measurements. The AI segmentation and the AI-ROI method were evaluated on 1,014 non-contrast enhanced chest CT images from eight international datasets: LIDC-IDRI, NSCLC-Lung1, RIDER, VESSEL12, RICORD-1A, RICORD-1B, COVID-19-Italy, and COVID-19-China. AI segmentation achieved a mean Dice coefficient of 0.957. Attenuations measured by AI-ROI showed no significant difference from expert measurements (p = 0.545) while reducing measurement time by 71%. For steatosis classification, AI-ROI achieved an area under the curve (AUC) of 0.921 (95% CI: 0.883 - 0.959). If performed as a routine screening method, our AI protocol could potentially enable early non-invasive, non-pharmacological preventative interventions for hepatic steatosis. The 1,014 expert-annotated liver segmentations with hepatic steatosis annotations can be downloaded here: https://drive.google.com/drive/folders/1-g_zJeAaZXYXGqL1OeF6pUjr6KB0igJX.
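A minimal sketch of how an automatic ROI could be placed inside an AI liver segmentation to measure attenuation: the erosion-based placement and the 40 HU steatosis cutoff below are common conventions used here as assumptions, not the paper's exact protocol.

```python
import numpy as np
from scipy import ndimage

def ai_roi_attenuation(ct_hu, liver_mask, erosion_iters=5):
    """Erode the AI segmentation so the ROI stays clear of vessels and the liver
    boundary, then average the Hounsfield units of the remaining voxels."""
    core = ndimage.binary_erosion(liver_mask, iterations=erosion_iters)
    if not core.any():  # fall back if erosion removed the whole organ
        core = liver_mask.astype(bool)
    return float(ct_hu[core].mean())

def is_steatotic(mean_hu, threshold_hu=40.0):
    """Illustrative rule of thumb: flag steatosis below ~40 HU (assumed cutoff)."""
    return mean_hu < threshold_hu
```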
Domain adaptation (DA) aims to transfer the knowledge of a well-labeled source domain to facilitate learning on an unlabeled target domain. When turning to specific tasks such as indoor (Wi-Fi) localization, a cross-domain regressor must be learned to mitigate the domain shift. This paper proposes a novel method, the Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model. Specifically, a discrepant bi-regressor architecture is developed to maximize the difference between the two regressors' predictions, thereby discovering uncertain target instances that lie far from the source distribution; an adversarial training mechanism between the feature extractor and the dual regressors is then adopted to produce domain-invariant representations. To further bridge the large domain gap, a domain-specific augmentation module is designed to synthesize two source-similar and target-similar intermediate domains, gradually eliminating the mismatch between the original domains. Empirical studies on two cross-domain regression benchmarks illustrate the power of our method in solving the domain adaptive regression (DAR) problem.
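The bi-regressor discrepancy idea can be sketched in a few lines of PyTorch; the layer sizes and the two training steps below are illustrative assumptions, not the ABRNet implementation.

```python
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared feature extractor
reg1, reg2 = nn.Linear(64, 1), nn.Linear(64, 1)      # dual regressors

def discrepancy(p1, p2):
    """Mean absolute disagreement between the two regressors' predictions."""
    return (p1 - p2).abs().mean()

x_t = torch.randn(32, 128)  # a batch of unlabeled target samples
# Step 1 (update regressors only): MAXIMIZE discrepancy to expose uncertain
# target instances far from the source distribution.
f = feat(x_t).detach()
loss_regressors = -discrepancy(reg1(f), reg2(f))
# Step 2 (update extractor only): MINIMIZE discrepancy, pulling target
# features toward regions where both regressors agree (source support).
loss_extractor = discrepancy(reg1(feat(x_t)), reg2(feat(x_t)))
```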
In this paper, we introduce a new task, spoken video grounding (SVG), which aims to localize the desired video segment described by a spoken description. Compared with using text, using audio requires the model to directly exploit the useful phonemes and syllables in the raw speech that relate to the video. Moreover, we randomly add environmental noise to the speech audio, further increasing the difficulty of the task and better simulating real applications. To rectify the discriminative phonemes and extract video-related information from noisy audio, we develop a novel video-guided curriculum learning (VGCL) scheme for audio pre-training, which can exploit important visual cues to help understand the spoken language and suppress external noise. Considering that the model cannot access the ground-truth video segment during inference, we design a curriculum strategy that gradually shifts the input video from the ground-truth segment to the entire video content during pre-training. In the end, the model can learn how to extract crucial visual information from the entire video clip to help understand the spoken language. In addition, we collect the first large-scale spoken video grounding dataset based on ActivityNet, called the ActivityNet Speech dataset. Extensive experiments demonstrate that our proposed video-guided curriculum learning facilitates the pre-training process to obtain a mutual audio encoder, greatly improving the performance of the spoken video grounding task. Moreover, we show that under noisy audio, our model outperforms methods that ground videos with ASR transcripts, further demonstrating the effectiveness of our curriculum strategy.
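The curriculum strategy of widening the input from the ground-truth segment to the whole video can be sketched as a simple schedule; the linear widening below is an assumption, since the abstract does not specify the schedule's shape.

```python
def curriculum_window(gt_start, gt_end, video_len, progress):
    """Widen the pre-training input video window from the ground-truth segment
    (progress=0) to the whole video (progress=1), in seconds."""
    assert 0.0 <= progress <= 1.0
    start = gt_start * (1.0 - progress)              # left edge drifts toward 0
    end = gt_end + (video_len - gt_end) * progress   # right edge drifts toward video_len
    return start, end

print(curriculum_window(12.0, 20.0, 60.0, 0.5))  # halfway through training: (6.0, 40.0)
```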
Kidney structure segmentation is a crucial yet challenging task for computer-aided diagnosis of surgery-based renal cancer. Although many deep learning models have achieved remarkable success in medical image segmentation tasks, accurate segmentation of kidney structures on computed tomography angiography (CTA) images remains challenging due to the variable sizes of kidney tumors and the ambiguous boundaries between tumors and their surroundings. In this paper, we propose a boundary-aware network (BA-Net) to segment kidneys, kidney tumors, arteries, and veins in CTA scans. The model contains a shared encoder, a boundary decoder, and a segmentation decoder. Both decoders adopt a multi-scale deep supervision strategy, which can alleviate the problem of variable tumor sizes. The boundary probability maps produced by the boundary decoder at each scale are used as attention to enhance the segmentation feature maps. We evaluated BA-Net on the Kidney PArsing (KiPA) Challenge dataset and achieved an average Dice score of 89.65% for kidney structure segmentation on CTA scans using 4-fold cross-validation. The results demonstrate the effectiveness of BA-Net.
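A minimal sketch of using boundary probability maps as attention over segmentation features, as described above; the residual gating form is an assumption, since the abstract does not specify the exact fusion.

```python
import torch

def boundary_attention(seg_feats, boundary_logits):
    """Gate segmentation features with the boundary decoder's probability map.
    seg_feats: (B, C, H, W); boundary_logits: (B, 1, H, W) at the same scale."""
    attn = torch.sigmoid(boundary_logits)
    return seg_feats * (1.0 + attn)  # residual gating keeps non-boundary context

feats = torch.randn(2, 64, 32, 32)
enhanced = boundary_attention(feats, torch.randn(2, 1, 32, 32))
```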
The base learners and labeled samples (shots) in an ensemble few-shot classifier greatly affect model performance. When the performance is not satisfactory, it is usually difficult to understand the underlying causes and make improvements. To tackle this issue, we propose a visual analysis method, FSLDiagnotor. Given a set of base learners and a collection of samples with a few shots, we consider two problems: 1) finding a subset of base learners that predict the sample collection well; and 2) replacing low-quality shots with more representative ones to adequately represent the sample collection. We formulate both problems as sparse subset selection and develop two selection algorithms to recommend appropriate learners and shots, respectively. A matrix visualization and a scatterplot are combined to explain the recommended learners and shots in context and facilitate users in adjusting them. Based on the adjustments, the algorithm updates the recommendation results for another round of improvement. Two case studies were conducted, demonstrating that FSLDiagnotor helps build a few-shot classifier efficiently and increases accuracy by 12% and 21%, respectively.
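One standard way to instantiate such a sparse subset selection is a greedy facility-location heuristic; the sketch below is a stand-in under that assumption, not FSLDiagnotor's actual algorithm.

```python
import numpy as np

def select_representative_shots(sim, k):
    """Greedy facility-location selection: pick k shots so that every sample in
    the collection is close to some selected shot. sim[i, j] is the similarity
    of candidate shot i to sample j."""
    n_shots, n_samples = sim.shape
    selected, covered = [], np.zeros(n_samples)
    for _ in range(k):
        gains = [np.maximum(covered, sim[i]).sum() if i not in selected else -np.inf
                 for i in range(n_shots)]
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, sim[best])  # update per-sample best coverage
    return selected

sim = np.random.rand(20, 100)  # 20 candidate shots, 100 samples (toy data)
print(select_representative_shots(sim, 5))
```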
Predicting future motion from historical motion sequences is a fundamental problem in computer vision, with wide applications in autonomous driving and robotics. Some recent works have shown that graph convolutional networks (GCNs) are helpful for modeling the relationships between different joints. However, given the variable and diverse action types in human motion data, the cross-dependency of spatio-temporal relationships is hard to capture with decoupled modeling strategies, which may also exacerbate the problem of insufficient generalization. We therefore propose the spatio-temporal gating-adjacency GCN (GAGCN) to learn complex spatio-temporal dependencies across diverse action types. Specifically, we adopt a gating network to enhance the generalization of the GCN via a trainable adaptive adjacency matrix obtained by blending candidate spatio-temporal adjacency matrices. Moreover, GAGCN addresses the cross-dependency of space and time by balancing the weights of spatio-temporal modeling and fusing the decoupled spatio-temporal features. Extensive experiments on Human3.6M, AMASS, and 3DPW show that GAGCN achieves state-of-the-art performance in both short-term and long-term prediction.
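The gating idea of blending candidate adjacency matrices can be sketched as follows; the candidate count, feature dimension, and pooling are illustrative assumptions rather than GAGCN's exact design.

```python
import torch
import torch.nn as nn

class GatedAdjacency(nn.Module):
    """Blend K candidate adjacency matrices with input-conditioned gates, so
    the joint graph adapts to the action type (a sketch of the gating idea)."""
    def __init__(self, num_joints, k=3, feat_dim=64):
        super().__init__()
        self.candidates = nn.Parameter(torch.randn(k, num_joints, num_joints))
        self.gate = nn.Sequential(nn.Linear(feat_dim, k), nn.Softmax(dim=-1))

    def forward(self, x):             # x: (B, J, feat_dim) per-joint motion features
        w = self.gate(x.mean(dim=1))  # (B, K) gates from pooled features
        return torch.einsum('bk,kij->bij', w, self.candidates)  # (B, J, J)

adj = GatedAdjacency(num_joints=22)(torch.randn(8, 22, 64))  # adaptive adjacency per sample
```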
The world is currently experiencing an ongoing pandemic of an infectious disease, coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Computed tomography (CT) plays an important role in assessing the severity of the infection and can also be used to identify symptomatic and asymptomatic COVID-19 carriers. With the surge in the cumulative number of COVID-19 patients, radiologists are increasingly stressed by manually examining CT scans. Automated 3D CT scan recognition tools are therefore in high demand, since manual analysis is time-consuming for radiologists and their fatigue can lead to possible misjudgments. However, due to the varying technical specifications of CT scanners in different hospitals, the appearance of CT images can differ significantly, causing many automated image recognition approaches to fail. Handling the multi-domain shift problem in multi-center and multi-scanner studies is therefore critical for reliable recognition and for reproducible, objective diagnosis and prognosis. In this paper, we propose a COVID-19 CT scan recognition model, the coronavirus information fusion and diagnosis network (CIFD-Net), which can efficiently handle the multi-domain shift problem via a new robust weakly supervised learning paradigm. Compared with other state-of-the-art methods, our model can reliably and efficiently resolve the problem of varying appearances in CT scan images.
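The abstract does not detail the weakly supervised paradigm; one plausible reading is attention-based pooling of per-slice features under scan-level labels only, sketched below as an assumption rather than CIFD-Net's actual architecture.

```python
import torch
import torch.nn as nn

class SliceAttentionPool(nn.Module):
    """Aggregate per-slice CT features into a scan-level prediction with learned
    attention, so only a scan-level label is needed (a hypothetical reading of
    the weakly supervised paradigm, not the paper's exact design)."""
    def __init__(self, dim=256, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, slice_feats):                       # (B, S, dim)
        a = torch.softmax(self.attn(slice_feats), dim=1)  # (B, S, 1) slice weights
        scan_feat = (a * slice_feats).sum(dim=1)          # weighted slice pooling
        return self.cls(scan_feat)                        # (B, n_classes)

logits = SliceAttentionPool()(torch.randn(4, 60, 256))  # 4 scans, 60 slices each
```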
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without the expensive filtering or post-processing steps used in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enable zero-shot image classification and also set new state-of-the-art results on the Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
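The dual-encoder contrastive objective described here is the standard symmetric InfoNCE loss over in-batch pairs; the sketch below assumes precomputed embeddings and a fixed temperature (the paper learns its temperature).

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings:
    matched pairs sit on the diagonal of the similarity matrix."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(logits.size(0))        # pair i matches pair i
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_alignment_loss(torch.randn(32, 512), torch.randn(32, 512))
```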